
    Robust Machine Learning In Computer Vision

    Deep neural networks have been shown to be successful in various computer vision tasks such as image classification and object detection. Although deep neural networks have exceeded human performance in many tasks, robustness and reliability remain persistent concerns when deploying deep learning models. On the one hand, degraded images and videos hurt the performance of computer vision tasks. On the other hand, deep neural networks under adversarial attack can fail completely. Motivated by this vulnerability, I analyze and develop image restoration and adversarial defense algorithms toward a vision of robust machine learning in computer vision. In this dissertation, I study two types of degradation that make deep neural networks vulnerable. The first part of the dissertation focuses on face recognition at long range, whose performance is severely degraded by atmospheric turbulence. The theme is improving the performance and robustness of various tasks in face recognition systems, such as facial keypoint localization, feature extraction, and image restoration. The second part focuses on defending against adversarial attacks in the image classification task. The theme is exploring adversarial defense methods that achieve good standard accuracy, robustness to adversarial attacks under known threat models, and good generalization to other, unseen attacks.
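    The gradient-based attacks this dissertation defends against can be illustrated with a minimal sketch. The snippet below is not the dissertation's method; it shows the fast gradient sign method (FGSM) idea on a toy one-feature logistic model, where the model, weight, and epsilon are all made-up values for illustration.

    ```python
    import math

    # Toy logistic "classifier" with a single scalar feature (illustrative only).
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def loss(w, x, y):
        # Binary cross-entropy for one example.
        p = sigmoid(w * x)
        return -(y * math.log(p) + (1 - y) * math.log(1 - p))

    def fgsm(w, x, y, eps):
        # FGSM: step the input by eps in the sign of the loss gradient.
        # For this model, dL/dx = (p - y) * w.
        p = sigmoid(w * x)
        grad_x = (p - y) * w
        return x + eps * (1.0 if grad_x > 0 else -1.0)

    w, x, y = 2.0, 1.0, 1.0          # hypothetical weight, input, and label
    x_adv = fgsm(w, x, y, eps=0.5)   # perturbed input raises the loss
    ```

    A small perturbation like this is enough to increase the loss of the toy model, which is the same mechanism that, at scale, can "break" a deep network completely.
    
    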

    Attribute-Guided Encryption with Facial Texture Masking

    The increasingly pervasive facial recognition (FR) systems raise serious concerns about personal privacy, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from unauthorized FR systems by using adversarial attacks to generate encrypted face images that prevent users from being identified. However, existing methods suffer from poor visual quality or low attack success rates, which limits their usability in practice. In this paper, we propose Attribute-Guided Encryption with Facial Texture Masking (AGE-FTM), which performs a dual-manifold adversarial attack on FR systems to achieve both good visual quality and high black-box attack success rates. In particular, AGE-FTM utilizes a high-fidelity generative adversarial network (GAN) to generate natural on-manifold adversarial samples by modifying facial attributes, and performs a facial texture masking attack to generate imperceptible off-manifold adversarial samples. Extensive experiments on the CelebA-HQ dataset demonstrate that our proposed method produces more natural-looking encrypted images than state-of-the-art methods while achieving competitive attack performance. We further evaluate the effectiveness of AGE-FTM in the real world using a commercial FR API and validate its usefulness in practice through a user study.

    DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection

    The increasingly pervasive facial recognition (FR) systems raise serious concerns about personal privacy, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from being identified by unauthorized FR systems by using adversarial attacks to generate encrypted face images. However, existing methods suffer from poor visual quality or low attack success rates, which limits their utility. Recently, diffusion models have achieved tremendous success in image generation. In this work, we ask: can diffusion models be used to generate adversarial examples to improve both visual quality and attack performance? We propose DiffProtect, which utilizes a diffusion autoencoder to generate semantically meaningful perturbations on FR systems. Extensive experiments demonstrate that DiffProtect produces more natural-looking encrypted images than state-of-the-art methods while achieving significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
    Comment: Code will be available at https://github.com/joellliu/DiffProtect

    Multi-Modal Human Authentication Using Silhouettes, Gait and RGB

    Whole-body-based human authentication is a promising approach for remote biometrics scenarios. Current literature focuses on either body recognition based on RGB images or gait recognition based on body shapes and walking patterns; both have their advantages and drawbacks. In this work, we propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performance for indoor and outdoor whole-body-based recognition. Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis. GaitPattern contributes to robust identification performance over a large range of viewing angles. Extensive experimental results on the CASIA-B dataset demonstrate that the proposed method outperforms state-of-the-art recognition systems. We also provide experimental results on the newly collected BRIAR dataset.
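    One simple way to combine two modalities, as a dual-modal ensemble must, is late fusion of per-modality similarity scores. The sketch below is our own illustration (not the paper's DME architecture): normalize each branch's gallery scores and take a weighted average. All names, weights, and scores here are hypothetical.

    ```python
    # Hypothetical late-fusion sketch: min-max normalize each modality's
    # similarity scores, then combine with a weighted average.
    def minmax(scores):
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo) for s in scores]

    def fuse(rgb_scores, sil_scores, w_rgb=0.5):
        rgb_n = minmax(rgb_scores)
        sil_n = minmax(sil_scores)
        return [w_rgb * r + (1 - w_rgb) * s for r, s in zip(rgb_n, sil_n)]

    rgb = [0.2, 0.9, 0.4]     # made-up scores from an RGB branch
    sil = [10.0, 30.0, 20.0]  # made-up scores from a silhouette/gait branch
    fused = fuse(rgb, sil)
    best = max(range(len(fused)), key=fused.__getitem__)  # top gallery match
    ```

    Normalizing before fusing matters because the two branches typically produce scores on different scales, as the made-up numbers above show.
    
    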

    Whole-body Detection, Recognition and Identification at Altitude and Range

    In this paper, we address the challenging task of whole-body biometric detection, recognition, and identification at distances of up to 500 m and large pitch angles of up to 50 degrees. We propose an end-to-end system evaluated on diverse datasets, including the challenging Biometric Recognition and Identification at Range (BRIAR) dataset. Our approach involves pre-training the detector on common image datasets and fine-tuning it on BRIAR's complex videos and images. After detection, we extract body images and employ a feature extractor for recognition. We conduct thorough evaluations under various conditions, such as different ranges and angles in indoor, outdoor, and aerial scenarios. Our method achieves an average F1 score of 98.29% at IoU = 0.7 and demonstrates strong performance in recognition accuracy and true acceptance rate at low false acceptance rates compared to existing models. On a test set of 100 subjects with 444 distractors, our model achieves a rank-20 recognition accuracy of 75.13% and a TAR@1%FAR of 54.09%.
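    The metric "F1 at IoU = 0.7" quoted above can be unpacked with a short sketch: a predicted box counts as a true positive only if it overlaps an unmatched ground-truth box with IoU at or above the threshold. This is a generic illustration of that evaluation logic, not the paper's evaluation code; boxes are (x1, y1, x2, y2) and the example data is made up.

    ```python
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    # Greedy matching: each prediction may claim one unmatched ground truth.
    def f1_at_iou(preds, gts, thr=0.7):
        matched, tp = set(), 0
        for p in preds:
            for i, g in enumerate(gts):
                if i not in matched and iou(p, g) >= thr:
                    matched.add(i)
                    tp += 1
                    break
        fp, fn = len(preds) - tp, len(gts) - tp
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

    preds = [(0, 0, 10, 10), (20, 20, 30, 30)]  # hypothetical detections
    gts = [(0, 0, 10, 10), (50, 50, 60, 60)]    # hypothetical ground truth
    score = f1_at_iou(preds, gts)               # one TP, one FP, one FN
    ```

    With one true positive, one false positive, and one missed ground truth, the example yields an F1 of 0.5, which makes the paper's 98.29% at the stricter 0.7 threshold easy to interpret.
    
    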

    Dataset on seston and zooplankton fatty-acid compositions, zooplankton and phytoplankton biomass, and environmental conditions of coastal and offshore waters of the northern Baltic Sea

    We analyzed the taxonomic and fatty-acid (FA) compositions of phytoplankton and zooplankton, and the environmental conditions at three coastal and offshore stations of the northern Baltic Sea. Plankton samples for FA analyses were collected under the framework of sampling campaigns of the Swedish National Marine Monitoring program in September 2017. Monitoring data on phytoplankton and zooplankton biomass and environmental variables at each station were extracted from the Swedish Meteorological and Hydrological Institute database (https://sharkweb.smhi.se/). Monthly phytoplankton biomass at each station in July-September 2017 was aggregated by class (i.e., chrysophytes, cryptophytes, dinoflagellates, diatoms, euglenophytes, cyanobacteria, etc.). Zooplankton biomass in September 2017 was aggregated by major taxa (i.e., Acartia sp. [Calanoida], Eurytemora affinis [Calanoida], Cladocera, Limnocalanus macrurus, and other copepods (i.e., excluding Eurytemora and Acartia)). Environmental variables monitored monthly in January-October 2017 included salinity and concentrations of dissolved organic carbon, humic substances, total nitrogen, and total phosphorus. These variables were measured from 0 to 10 m below the water surface, and the depth-integrated averages were used for data analyses. Seston and zooplankton (Eurytemora affinis, Acartia sp., and Cladocera) FA compositions were analyzed using gas chromatography-mass spectrometry (GC-MS). Our dataset could provide new insights into how the taxonomic composition and biochemical quality of planktonic food chains change with environmental conditions in subarctic marine ecosystems.
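    The depth-integrated averaging mentioned above can be sketched as trapezoidal integration of a profile sampled at discrete depths, divided by the depth range. This is our own illustration of the general technique, not the dataset's processing code, and the salinity values below are invented.

    ```python
    # Depth-integrated mean over a 0-10 m profile via trapezoidal integration.
    def depth_integrated_mean(depths, values):
        total = 0.0
        for i in range(1, len(depths)):
            dz = depths[i] - depths[i - 1]
            total += 0.5 * (values[i] + values[i - 1]) * dz  # trapezoid area
        return total / (depths[-1] - depths[0])

    # Hypothetical salinity profile at 0, 5, and 10 m depth.
    salinity = depth_integrated_mean([0.0, 5.0, 10.0], [6.0, 5.5, 5.0])
    ```

    The trapezoidal rule weights each sample by the depth interval it represents, so unevenly spaced samples are handled correctly, unlike a plain arithmetic mean.
    
    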